Multispecies Fruit Flower Detection Using a Refined Semantic Segmentation Network
In fruit production, critical crop management decisions are guided by bloom intensity, i.e., the number of flowers present in an orchard. Despite its importance, bloom intensity is still typically estimated by means of human visual inspection. Existing automated computer vision systems for flower identification are based on hand-engineered techniques that work only under specific conditions and with limited performance. This letter proposes an automated technique for flower identification that is robust to uncontrolled environments and applicable to different flower species. Our method relies on an end-to-end residual convolutional neural network (CNN) that represents the state of the art in semantic segmentation. To enhance its sensitivity to flowers, we fine-tune this network using a single dataset of apple flower images. Since CNNs tend to produce coarse segmentations, we employ a refinement method to better distinguish between individual flower instances. Experimental results on images of apple, peach, and pear flowers acquired under different conditions, obtained without any preprocessing or dataset-specific training, demonstrate the robustness and broad applicability of our method.
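The abstract does not detail the refinement method itself; as a purely illustrative sketch, the snippet below shows the simplest way a coarse binary segmentation can be split into candidate flower instances, namely connected-component labeling with 4-connectivity (the function name and toy mask are assumptions, not from the paper):

```python
from collections import deque

def count_instances(mask):
    """Count connected foreground regions (4-connectivity) in a binary mask.

    A coarse segmentation often merges nearby flowers into one blob;
    labeling connected components is a minimal way to split the mask
    into candidate instances.
    """
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                count += 1
                queue = deque([(r, c)])
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
    return count

mask = [
    [1, 1, 0, 0],
    [1, 0, 0, 1],
    [0, 0, 0, 1],
]
print(count_instances(mask))  # -> 2 (two separate flower candidates)
```

Real pipelines typically add shape- or color-based refinement on top of this, since touching flowers remain a single component under plain connectivity.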
Apple Flower Detection Using Deep Convolutional Networks
To optimize fruit production, a portion of the flowers and fruitlets of apple trees must be removed early in the growing season. The proportion to be removed is determined by the bloom intensity, i.e., the number of flowers present in the orchard. Several automated computer vision systems have been proposed to estimate bloom intensity, but their overall performance is still far from satisfactory even in relatively controlled environments. With the goal of devising a technique for flower identification which is robust to clutter and to changes in illumination, this paper presents a method in which a pre-trained convolutional neural network is fine-tuned to become especially sensitive to flowers. Experimental results on a challenging dataset demonstrate that our method significantly outperforms three approaches that represent the state of the art in flower detection, with recall and precision rates higher than 90%. Moreover, a performance assessment on three additional datasets previously unseen by the network, which consist of different flower species and were acquired under different conditions, reveals that the proposed method substantially surpasses baseline approaches in terms of generalization capability.
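The reported recall and precision rates above 90% follow the standard detection-metric definitions; the sketch below computes both from aligned binary decisions (the function name and toy labels are illustrative, not data from the paper):

```python
def precision_recall(pred, truth):
    """Standard detection metrics over aligned binary labels (1 = flower).

    precision = TP / (TP + FP): fraction of detections that are real flowers.
    recall    = TP / (TP + FN): fraction of real flowers that were detected.
    """
    tp = sum(1 for p, t in zip(pred, truth) if p and t)
    fp = sum(1 for p, t in zip(pred, truth) if p and not t)
    fn = sum(1 for p, t in zip(pred, truth) if not p and t)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# 9 of 10 detections correct, 9 of 10 true flowers found -> 90% / 90%
pred = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0]
truth = [1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1]
print(precision_recall(pred, truth))  # -> (0.9, 0.9)
```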
Automated identification and quantification of microfibres and microplastics
The ubiquitous presence of microlitter (ML), namely microplastics (MP) and microfibres (MF), in the global environment is of growing concern for science and society in general. Reliable methods are urgently needed for the identification and quantification of these emerging environmental pollutants. Recently, a rapid Fourier transform infrared (FTIR) imaging pipeline was developed for automated identification and quantification of MP. However, although its usefulness for the quantification of MP has already been shown in several studies, the analysis pipeline so far could not target microfibres. In this study we present a novel approach for the simultaneous identification and quantification of MP and MF. By concentrating the sample on membrane filters and applying a BaF2 window on top of the filter, all objects, including MF, are fixed in the focal plane of the FTIR microscope. Furthermore, the analysis pipeline was augmented with algorithms that take into consideration the filamentous structure of MF. The novel analysis pipeline now allows MP and MF to be separated via a preselection of fibres from the dataset by object size and shape. MP and MF are subsequently further investigated for specific polymer types and lengths/sizes. After parameter optimization, the newly developed analysis approach was applied to archived samples from previous studies on treated wastewater. The results were compared with the originally detected polymer types and counts, now also taking MF detection into account.
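The abstract mentions preselecting fibres by object size and shape but not the exact descriptors used; the sketch below uses bounding-box elongation as a hypothetical stand-in shape criterion (the `min_elongation` threshold and function name are assumptions, not values from the study):

```python
def is_fibre(pixels, min_elongation=5.0):
    """Crude fibre preselection from an object's pixel coordinates.

    An object is flagged as a fibre candidate when its bounding box is much
    longer than it is wide. Real pipelines use richer shape descriptors
    (e.g. skeleton length vs. width); this elongation test is only a sketch.
    """
    ys = [p[0] for p in pixels]
    xs = [p[1] for p in pixels]
    height = max(ys) - min(ys) + 1
    width = max(xs) - min(xs) + 1
    long_side, short_side = max(height, width), min(height, width)
    return long_side / short_side >= min_elongation

# a 1-pixel-wide, 10-pixel-long object vs. a compact 3x3 blob
fibre = [(0, x) for x in range(10)]
blob = [(y, x) for y in range(3) for x in range(3)]
print(is_fibre(fibre), is_fibre(blob))  # -> True False
```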
Identification and Quantification of Microfibers and Microparticles via FTIR imaging and Automated Analysis
The ubiquitous presence of small plastic particles and fibers [...] To overcome this limitation, we present a novel approach to measure, characterize and quantify microfibers and -particles concentrated onto membrane filters within one measurement. FTIR imaging was extended to fibers via a specialized sample handling that fixes the fibers in the focal plane of the instrument. Furthermore, an image-analysis procedure was developed to follow the automated measurement. Potential fibers were preselected from the dataset and further investigated. Different parameters allowed a detailed investigation of object shape in order to differentiate small particles from short fibers. Afterwards, the fibers are characterized by individual polymer type, length and abundance. Via a filament-closing process, small gaps within larger fibers are closed to exclude overestimation caused by overlapping objects. Subsequently, the procedure was applied to datasets from studies of treated wastewater to investigate the type and number of microfibers and -particles within the effluent of the treatment plants. As a first result, cellulose fibers were found in all of the investigated samples. Moreover, synthetic fibers of different polymer types were successfully identified, consisting mainly of polyethylene, polypropylene, polyacrylates and polyamide. With this dataset we could show that we have developed a versatile and easily applicable tool for the investigation of microlitter. The automated analysis of microfibers and -particles within one measurement further advances the harmonization of microplastic research needed for the implementation of standardized operational protocols (SOPs) in the future.
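The filament-closing step that bridges small gaps in larger fibers can be illustrated with one-dimensional morphological closing (dilation followed by erosion) along a binary intensity profile; the radius parameter `k` and the toy profiles below are assumptions for illustration, not parameters from the study:

```python
def dilate(bits, k=1):
    """A bit is set if any bit within radius k is set (grows objects)."""
    return [int(any(bits[max(0, i - k):i + k + 1])) for i in range(len(bits))]

def erode(bits, k=1):
    """A bit stays set only if all in-range bits within radius k are set."""
    return [int(all(bits[max(0, i - k):i + k + 1])) for i in range(len(bits))]

def close_gaps(bits, k=1):
    """Morphological closing: bridges gaps up to ~2k pixels, so two segments
    of one fiber separated by a tiny break are counted as a single object,
    while genuinely separate objects stay apart."""
    return erode(dilate(bits, k), k)

print(close_gaps([1, 1, 0, 1, 1]))  # small gap bridged -> [1, 1, 1, 1, 1]
print(close_gaps([1, 0, 0, 0, 1]))  # wide gap preserved -> [1, 0, 0, 0, 1]
```

The same idea extends to 2D with a disk-shaped structuring element; the key design point is that the closing radius bounds how large a break may be before two segments are treated as distinct objects.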
FreeLabel: A Publicly Available Annotation Tool Based on Freehand Traces
Large-scale annotation of image segmentation datasets is often prohibitively expensive, as it usually requires a huge number of worker hours to obtain high-quality results. Abundant and reliable data have, however, been crucial for the advances on image understanding tasks achieved by deep learning models. In this paper, we introduce FreeLabel, an intuitive open-source web interface that allows users to obtain high-quality segmentation masks with just a few freehand scribbles, in a matter of seconds. The efficacy of FreeLabel is quantitatively demonstrated by experimental results on the PASCAL dataset as well as on a dataset from the agricultural domain. Designed to benefit the computer vision community, FreeLabel can be used for both crowdsourced and private annotation and has a modular structure that can be easily adapted to any image dataset. Comment: Accepted and presented at the 2019 IEEE Winter Conference on Applications of Computer Vision (WACV). 10 pages.
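FreeLabel expands a few scribbles into a full mask; one common way to do this is region growing from the scribble pixels. The sketch below is a deliberately simplified, hypothetical stand-in (a tolerance-based 4-neighbour flood fill on a grayscale grid), not FreeLabel's actual refinement algorithm, and the function name and `tol` parameter are assumptions:

```python
from collections import deque

def grow_from_scribble(image, seeds, tol=10):
    """Grow a mask outward from scribble pixels, absorbing 4-neighbours
    whose intensity is within `tol` of the adjoining mask pixel."""
    rows, cols = len(image), len(image[0])
    mask = [[False] * cols for _ in range(rows)]
    queue = deque(seeds)
    for r, c in seeds:
        mask[r][c] = True
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols and not mask[nr][nc]
                    and abs(image[nr][nc] - image[r][c]) <= tol):
                mask[nr][nc] = True
                queue.append((nr, nc))
    return mask

# one scribble pixel in the dark region; the bright column is excluded
img = [
    [10, 12, 90],
    [11, 13, 95],
]
print(grow_from_scribble(img, [(0, 0)], tol=5))
```

The appeal of this family of methods for annotation tools is that the expensive part (delineating boundaries) is delegated to low-level image evidence, so the human only needs to supply a handful of rough interior strokes.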